A search engine is a software system that provides hyperlinks to web pages and other relevant information on the Web in response to a user's query. The user enters a query in a web browser or a mobile app, and the search results are typically presented as a list of hyperlinks accompanied by textual summaries and images. Users also have the option of limiting a search to specific types of results, such as images, videos, or news.
For a search provider, its engine is part of a distributed computing system that can encompass many data centers throughout the world. The speed and accuracy of an engine's response to a query are based on a complex system of indexing that is continuously updated by automated web crawlers. This can include data mining the files and databases stored on web servers, although some content belongs to the deep web and is not accessible to crawlers.
There have been many search engines since the dawn of the Web in the 1990s; however, Google Search became the dominant one in the 2000s and has remained so. As of May 2025, according to StatCounter, Google holds approximately 89–90% of the worldwide search share, with competitors trailing far behind: Microsoft Bing (~4%), Yandex Search (~2.5%), Yahoo! (~1.3%), DuckDuckGo (~0.8%), and Baidu (~0.7%) (StatCounter Global Stats, "Search Engine Market Share", May 2025). Notably, this marks the first time in over a decade that Google's share has fallen below the 90% threshold. The business of improving a website's visibility in search results, known as search engine marketing and search engine optimization, has thus largely focused on Google.
Timeline (full list)

| Year | Engine | Notes |
| 1993 | W3Catalog | |
| 1993 | ALIWEB | |
| 1993 | JumpStation | |
| 1993 | WWW Worm | |
| 1994 | WebCrawler | |
| 1994 | Go.com | Redirects to Disney |
| 1994 | Lycos | |
| 1994 | Infoseek | Redirects to Disney |
| 1995 | Yahoo! Search | Initially a search function for Yahoo! Directory |
| 1995 | Daum | |
| 1995 | Search.ch | |
| 1995 | Magellan | |
| 1995 | Excite | |
| 1995 | MetaCrawler | |
| 1995 | AltaVista | Acquired by Yahoo! in 2003; since 2013 redirects to Yahoo! |
| 1995 | SAPO | |
| 1996 | RankDex | Incorporated into Baidu in 2000 |
| 1996 | Dogpile | |
| 1996 | HotBot | Used Inktomi search technology |
| 1996 | Ask Jeeves | Rebranded ask.com |
| 1997 | AOL NetFind | Rebranded AOL Search since 1999 |
| 1997 | goo.ne.jp | |
| 1997 | Northern Light | |
| 1997 | Yandex | |
| 1998 | Google Search | |
| 1998 | Ixquick | Now Startpage.com |
| 1998 | MSN Search | Now Bing |
| 1998 | empas | Merged with NATE |
| 1999 | AlltheWeb | URL redirected to Yahoo! |
| 1999 | GenieKnows | Rebranded Yellowee; was redirecting to justlocalbusiness.com |
| 1999 | Naver | |
| 1999 | Teoma | Redirects to Ask.com |
| 2000 | Baidu | |
| 2000 | Exalead | |
| 2000 | Gigablast | |
| 2001 | Kartoo | |
| 2003 | Info.com | |
| 2004 | A9.com | |
| 2004 | Yippy | Redirects to DuckDuckGo |
| 2004 | Mojeek | |
| 2004 | Sogou | |
| 2005 | SearchMe | |
| 2005 | KidzSearch | Uses Google Search |
| 2006 | Soso | Merged with Sogou |
| 2006 | Quaero | |
| 2006 | Search.com | |
| 2006 | ChaCha | |
| 2006 | Ask.com | |
| 2006 | Live Search | Now Bing; rebranded from MSN Search |
| 2007 | wikiseek | |
| 2007 | Sproose | |
| 2007 | Wikia Search | |
| 2007 | Blackle.com | Uses Google Search |
| 2008 | Powerset | Redirects to Bing |
| 2008 | Picollator | |
| 2008 | Viewzi | |
| 2008 | Boogami | |
| 2008 | LeapFish | |
| 2008 | Forestle | Redirects to Ecosia |
| 2008 | DuckDuckGo | |
| 2008 | TinEye | |
| 2009 | Bing | Rebranded from Live Search |
| 2009 | Yebol | |
| 2009 | Scout (Goby) | |
| 2009 | NATE | |
| 2009 | Ecosia | |
| 2009 | Startpage.com | Sister engine of Ixquick |
| 2010 | Blekko | Sold to IBM |
| 2010 | Cuil | |
| 2010 | Yandex (English) | |
| 2010 | Parsijoo | |
| 2011 | YaCy | Peer-to-peer |
| 2012 | Volunia | |
| 2013 | Qwant | |
| 2014 | Egerin | Kurdish / Sorani |
| 2014 | Swisscows | |
| 2014 | Searx | |
| 2015 | Yooz | |
| 2015 | Cliqz | |
| 2016 | Kiddle | Uses Google Search |
| 2017 | Presearch | |
| 2018 | Kagi | |
| 2020 | Petal Search | |
| 2021 | Brave Search | |
| 2021 | Queye | |
| 2021 | You.com | |
Link analysis eventually became a crucial component of search engines through algorithms such as Hyper Search and PageRank.
Prior to September 1993, the World Wide Web was entirely indexed by hand. There was a list of web servers edited by Tim Berners-Lee and hosted on the CERN web server. One snapshot of the list from 1992 remains, but as more and more web servers went online the central list could no longer keep up. On the NCSA site, new servers were announced under the title "What's New!".
The first tool used for searching content (as opposed to users) on the Internet was Archie. The name stands for "archive" without the "v". It was created by Alan Emtage, a computer science student at McGill University in Montreal, Quebec, Canada. The program downloaded the directory listings of all the files located on public anonymous FTP (File Transfer Protocol) sites, creating a searchable database of file names; however, Archie did not index the contents of these sites, since the amount of data was so limited it could be readily searched manually.
The rise of Gopher (created in 1991 by Mark McCahill at the University of Minnesota) led to two new search programs, Veronica and Jughead. Like Archie, they searched the file names and titles stored in Gopher index systems. Veronica (Very Easy Rodent-Oriented Net-wide Index to Computerized Archives) provided a keyword search of most Gopher menu titles in the entire Gopher listings. Jughead (Jonzy's Universal Gopher Hierarchy Excavation And Display) was a tool for obtaining menu information from specific Gopher servers. While the name "Archie" was not a reference to the Archie Comics series, Veronica Lodge and Jughead Jones are characters in that series, thereby referencing their predecessor.
In the summer of 1993, no search engine existed for the web, though numerous specialized catalogs were maintained by hand. Oscar Nierstrasz at the University of Geneva wrote a series of Perl scripts that periodically mirrored these pages and rewrote them into a standard format. This formed the basis for W3Catalog, the web's first primitive search engine, released on September 2, 1993.
In June 1993, Matthew Gray, then at MIT, produced what was probably the first web robot, the Perl-based World Wide Web Wanderer, and used it to generate an index called "Wandex". The purpose of the Wanderer was to measure the size of the World Wide Web, which it did until late 1995. The web's second search engine, Aliweb, appeared in November 1993. Aliweb did not use a web robot, but instead depended on being notified by website administrators of the existence at each site of an index file in a particular format.
JumpStation (created in December 1993 by Jonathon Fletcher) used a web crawler to find web pages and to build its index, and used a web form as the interface to its query program. It was thus the first WWW resource-discovery tool to combine the three essential features of a web search engine (crawling, indexing, and searching) as described below. Because of the limited resources available on the platform it ran on, its indexing and hence searching were limited to the titles and headings found in the web pages the crawler encountered.
One of the first "all text" crawler-based search engines was WebCrawler, which came out in 1994. Unlike its predecessors, it allowed users to search for any word in any web page, which has become the standard for all major search engines since. It was also the first search engine to be widely known by the public. Also in 1994, Lycos (which started at Carnegie Mellon University) was launched and became a major commercial endeavor.
The first popular search engine on the Web was Yahoo! Search. The first product from Yahoo!, founded by Jerry Yang and David Filo in January 1994, was a Web directory called Yahoo! Directory. In 1995, a search function was added, allowing users to search Yahoo! Directory. It became one of the most popular ways for people to find web pages of interest, but its search function operated on its web directory, rather than its full-text copies of web pages.
Soon after, a number of search engines appeared and vied for popularity. These included Magellan, Excite, Infoseek, Inktomi, Northern Light, and AltaVista. Information seekers could also browse the directory instead of doing a keyword-based search.
In 1996, Robin Li developed the RankDex site-scoring algorithm for search engine results page ranking (Greenberg, Andy, "The Man Who's Beating Google", Forbes, October 5, 2009; Yanhong Li, "Toward a Qualitative Search Engine", IEEE Internet Computing, vol. 2, no. 4, pp. 24–29, July/Aug. 1998; "About: RankDex", rankdex.com) and received a US patent for the technology (USPTO, "Hypertext Document Retrieval System and Method", US Patent 5,920,859, inventor Yanhong Li, filed Feb 5, 1997, issued Jul 6, 1999). It was the first search engine that used hyperlinks to measure the quality of the websites it was indexing, predating the very similar algorithm patent filed by Google two years later, in 1998. Larry Page referenced Li's work in some of his U.S. patents for PageRank. Li later used his RankDex technology for the Baidu search engine, which he founded in China and launched in 2000.
In 1996, Netscape was looking to give a single search engine an exclusive deal as the featured search engine on Netscape's web browser. There was so much interest that instead, Netscape struck deals with five of the major search engines: for $5 million a year, each search engine would be in rotation on the Netscape search engine page. The five engines were Yahoo!, Magellan, Lycos, Infoseek, and Excite.
Google adopted the idea of selling search terms in 1998 from a small search engine company named goto.com. This move had a significant effect on the search engine business, which went from struggling to one of the most profitable businesses on the Internet.
Search engines were also among the brightest stars in the Internet investing frenzy of the late 1990s. Several companies entered the market spectacularly, posting record gains during their initial public offerings. Some have since taken down their public search engines and market enterprise-only editions, such as Northern Light. Many search engine companies were caught up in the dot-com bubble, a speculation-driven market boom that peaked in March 2000.
By 2000, Yahoo! was providing search services based on Inktomi's search engine. Yahoo! acquired Inktomi in 2002, and Overture (which owned AlltheWeb and AltaVista) in 2003. Yahoo! used Google's search engine for its results until 2004, when it launched its own search engine based on the combined technologies of its acquisitions.
Microsoft first launched MSN Search in the fall of 1998 using search results from Inktomi. In early 1999, the site began to display listings from Looksmart, blended with results from Inktomi. For a short time in 1999, MSN Search used results from AltaVista instead. In 2004, Microsoft began a transition to its own search technology, powered by its own web crawler (called msnbot).
Microsoft's rebranded search engine, Bing, was launched on June 1, 2009. On July 29, 2009, Yahoo! and Microsoft finalized a deal in which Yahoo! Search would be powered by Microsoft Bing technology.
Active search engine crawlers include those of Google, Sogou, Baidu, Bing, Gigablast, Mojeek, DuckDuckGo and Yandex.
Web search engines get their information by web crawling from site to site. The "spider" checks for the standard filename robots.txt addressed to it. The robots.txt file contains directives for search spiders, telling them which pages to crawl and which pages not to crawl. After checking for robots.txt and either finding it or not, the spider sends certain information back to be indexed depending on many factors, such as the titles, page content, JavaScript, Cascading Style Sheets (CSS), headings, and metadata in HTML meta tags. After a certain number of pages crawled, amount of data indexed, or time spent on the website, the spider stops crawling and moves on. "No web crawler may actually crawl the entire reachable web. Due to infinite websites, spider traps, spam, and other exigencies of the real web, crawlers instead apply a crawl policy to determine when the crawling of a site should be deemed sufficient. Some websites are crawled exhaustively, while others are crawled only partially" (Dasgupta, Anirban; Ghosh, Arpita; Kumar, Ravi; Olston, Christopher; Pandey, Sandeep; and Tomkins, Andrew. "The Discoverability of the Web". http://www.arpitaghosh.com/papers/discoverability.pdf).
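To make the robots.txt step concrete, here is a minimal sketch using Python's standard urllib.robotparser module; the user agent "ExampleBot" and the example.com URLs are hypothetical placeholders, not any real crawler's identity.

```python
# Minimal sketch: a crawler honoring robots.txt directives before
# fetching a page. "ExampleBot" and the URLs are placeholders.
from urllib import robotparser

rp = robotparser.RobotFileParser()
rp.set_url("https://example.com/robots.txt")
rp.read()  # fetch and parse the site's crawl directives

url = "https://example.com/private/page.html"
if rp.can_fetch("ExampleBot", url):
    print("allowed to crawl:", url)
else:
    print("disallowed by robots.txt; skipping:", url)
```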
Indexing means associating words and other definable tokens found on web pages with their domain names and HTML-based fields. The associations are stored in a public database and made accessible through web search queries. A query from a user can be a single word, multiple words or a sentence. The index helps find information relating to the query as quickly as possible. Some of the techniques for indexing and caching are trade secrets, whereas web crawling is a straightforward process of visiting all sites on a systematic basis.
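As an illustration of this association, the following toy Python sketch builds an inverted index mapping each token to the set of pages containing it; the page texts are invented, and real indexes also record positions, fields, and weights.

```python
# Toy inverted index: token -> set of pages containing the token.
# The page contents are invented for illustration.
from collections import defaultdict

pages = {
    "example.com/a": "search engines index web pages",
    "example.com/b": "web crawlers visit web pages",
}

index = defaultdict(set)
for url, text in pages.items():
    for token in text.lower().split():
        index[token].add(url)

print(sorted(index["pages"]))     # ['example.com/a', 'example.com/b']
print(sorted(index["crawlers"]))  # ['example.com/b']
```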
Between visits by the spider, the cached version of the page (some or all the content needed to render it) stored in the search engine working memory is quickly sent to an inquirer. If a visit is overdue, the search engine can just act as a web proxy instead. In this case, the page may differ from the search terms indexed. The cached page holds the appearance of the version whose words were previously indexed, so a cached version of a page can be useful to the website when the actual page has been lost, but this problem is also considered a mild form of linkrot.
Typically, when a user enters a query into a search engine it is a few keywords (Jansen, B. J., Spink, A., and Saracevic, T. 2000. "Real life, real users, and real needs: A study and analysis of user queries on the web". Information Processing & Management, 36(2), 207–227). The inverted index already has the names of the sites containing the keywords, and these are instantly obtained from the index. The real processing load is in generating the web pages that make up the search results list: every page in the entire list must be weighted according to information in the indexes. The top search result item then requires the lookup, reconstruction, and markup of the snippets showing the context of the keywords matched. These are only part of the processing each search results web page requires, and further pages (next to the top) require more of this post-processing.
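A rough sketch of the cheap part of this pipeline, under the simplifying assumption that the index is a plain in-memory dictionary: the posting sets for each keyword are fetched and intersected, while ranking and snippet generation (the expensive steps above) are omitted.

```python
# Sketch: answering a multi-keyword query by intersecting the
# posting sets held in a toy inverted index (ranking and snippet
# generation, the costly steps, are deliberately omitted).
index = {
    "web":      {"example.com/a", "example.com/b"},
    "pages":    {"example.com/a", "example.com/b"},
    "crawlers": {"example.com/b"},
}

def lookup(query, index):
    postings = [index.get(word, set()) for word in query.lower().split()]
    return set.intersection(*postings) if postings else set()

print(lookup("web crawlers", index))  # {'example.com/b'}
```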
Beyond simple keyword lookups, search engines offer their own GUI- or command-driven operators and search parameters to refine the search results. These provide the controls needed for the feedback loop in which users filter and weight results while refining a search, given the initial pages returned. For example, since 2007 the Google.com search engine has allowed users to filter by date by clicking "Show search tools" in the leftmost column of the initial search results page and then selecting the desired date range. It is also possible to weight by date because each page has a modification time. Most search engines support the Boolean operators AND, OR and NOT to help end users refine the search query. Boolean operators are for literal searches that allow the user to refine and extend the terms of the search; the engine looks for the words or phrases exactly as entered. Some search engines provide an advanced feature called proximity search, which allows users to define the distance between keywords. There is also concept search, where the analysis involves statistical methods applied to pages containing the words or phrases the user searches for.
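Boolean operators map naturally onto set operations over posting lists: AND is intersection, OR is union, and NOT is set difference. A minimal sketch with invented posting sets:

```python
# Boolean query evaluation as set algebra over invented posting
# sets: the query is "cats AND dogs NOT fleas".
cats  = {"p1", "p2", "p3"}
dogs  = {"p2", "p3"}
fleas = {"p3"}

result = (cats & dogs) - fleas  # AND = &, NOT = -
print(result)  # {'p2'}
```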
The usefulness of a search engine depends on the relevance of the result set it gives back. While there may be millions of web pages that include a particular word or phrase, some pages may be more relevant, popular, or authoritative than others. Most search engines employ methods to rank the results so as to provide the "best" results first. How a search engine decides which pages are the best matches, and in what order the results should be shown, varies widely from one engine to another. The methods also change over time as Internet usage changes and new techniques evolve. Two main types of search engine have evolved: one is a system of predefined and hierarchically ordered keywords that humans have programmed extensively; the other is a system that generates an "inverted index" by analyzing the texts it locates. The second form relies much more heavily on the computer itself to do the bulk of the work.
Most Web search engines are commercial ventures supported by advertising revenue, and thus some of them allow advertisers to pay to have their listings included in search results, a practice known as paid inclusion. Search engines that do not accept money for placement in their search results make money by running search-related ads alongside the regular search engine results. The search engines make money every time someone clicks on one of these ads.
The search engine Qwant is based in Paris, France, and attracts most of its 50 million monthly registered users from there.
Biases can also be a result of social processes, as search engine algorithms are frequently designed to exclude non-normative viewpoints in favor of more "popular" results. Indexing algorithms of major search engines skew towards coverage of U.S.-based sites, rather than websites from non-U.S. countries.
Google Bombing is one example of an attempt to manipulate search results for political, social or commercial reasons.
Several scholars have studied the cultural changes triggered by search engines, and the representation of certain controversial topics in their results, such as terrorism in Ireland, climate change denial (Hiroko Tabuchi, "How Climate Change Deniers Rise to the Top in Google Searches", The New York Times, December 29, 2017; retrieved November 14, 2018), and conspiracy theories.
A lack of investment and the slow pace of technological change in the Muslim world have hindered progress and thwarted the success of an Islamic search engine targeting Islamic adherents as its main consumers. Projects like Muxlim (a Muslim lifestyle site) received millions of dollars from investors such as Rite Internet Ventures, yet it also faltered. Other religion-oriented search engines are Jewogle, the Jewish version of Google, and the Christian search engine SeekFind.org. SeekFind filters out sites that attack or degrade their faith.
Some search engine submission software not only submits websites to multiple search engines, but also adds links to websites from their own pages. This could appear helpful in increasing a website's ranking, because external links are one of the most important factors determining a website's ranking. However, John Mueller of Google has stated that this "can lead to a tremendous number of unnatural links for your site" with a negative impact on site ranking.
The primary method of storing and retrieving files was via the File Transfer Protocol (FTP). This was (and still is) a system that specified a common way for computers to exchange files over the Internet. It works like this: Some administrator decides that he wants to make files available from his computer. He sets up a program on his computer, called an FTP server. When someone on the Internet wants to retrieve a file from this computer, he or she connects to it via another program called an FTP client. Any FTP client program can connect with any FTP server program as long as the client and server programs both fully follow the specifications set forth in the FTP protocol.
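For illustration, here is a minimal anonymous FTP session using Python's standard ftplib module; the host name ftp.example.com is a placeholder for any server that speaks the FTP protocol.

```python
# Minimal anonymous FTP session mirroring the client/server
# exchange described above. The host name is a placeholder.
from ftplib import FTP

ftp = FTP("ftp.example.com")  # connect to the FTP server
ftp.login()                   # anonymous login by default
ftp.retrlines("LIST")         # print the remote directory listing
ftp.quit()
```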
Initially, anyone who wanted to share a file had to set up an FTP server in order to make the file available to others. Later, "anonymous" FTP sites became repositories for files, allowing all users to post and retrieve them.
Even with archive sites, many important files were still scattered on small FTP servers. These files could be located only by the Internet equivalent of word of mouth: Somebody would post an e-mail to a message list or a discussion forum announcing the availability of a file.
Archie changed all that. It combined a script-based data gatherer, which fetched site listings of anonymous FTP files, with a regular expression matcher for retrieving file names matching a user query. In other words, Archie's gatherer scoured FTP sites across the Internet and indexed all of the files it found. Its regular expression matcher provided users with access to its database.
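In miniature, Archie-style retrieval amounts to running a regular expression over a gathered list of file names, as in this sketch (the listings are invented):

```python
# Archie in miniature: match a user's regular expression against
# a gathered list of FTP file names. The listings are invented.
import re

listings = [
    "netlib/lapack.tar.Z",
    "gnu/emacs-18.59.tar.gz",
    "rfc/rfc959.txt",
]

pattern = re.compile(r"emacs")
matches = [name for name in listings if pattern.search(name)]
print(matches)  # ['gnu/emacs-18.59.tar.gz']
```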
Matthew Gray's Wanderer created quite a controversy at the time, partially because early versions of the software ran rampant through the Net and caused a noticeable netwide performance degradation. This degradation occurred because the Wanderer would access the same page hundreds of times a day. The Wanderer soon amended its ways, but the controversy over whether robots were good or bad for the Internet remained.
In response to the Wanderer, Martijn Koster created Archie-Like Indexing of the Web, or ALIWEB, in October 1993. As the name implies, ALIWEB was the HTTP equivalent of Archie, and because of this, it is still unique in many ways.
ALIWEB does not have a web-searching robot. Instead, webmasters of participating sites post their own index information for each page they want listed. The advantage of this method is that users get to describe their own site, and a robot does not run about eating up Net bandwidth. The disadvantages of ALIWEB are more of a problem today. The primary disadvantage is that a special indexing file must be submitted. Most users do not understand how to create such a file, and therefore they do not submit their pages. This leads to a relatively small database, which means that users are less likely to search ALIWEB than one of the large bot-based sites. This Catch-22 has been somewhat offset by incorporating other databases into the ALIWEB search, but it still does not have the mass appeal of search engines such as Yahoo! or Lycos.
Excite, launched in 1995, was the first serious commercial search engine. It was developed at Stanford and was purchased by @Home for $6.5 billion. In 2001, Excite and @Home went bankrupt, and InfoSpace bought Excite for $10 million.
Some of the first analyses of web searching were conducted on search logs from Excite (Jansen, B. J., Spink, A., Bateman, J., and Saracevic, T. 1998. "Real life information retrieval: A study of user queries on the web". SIGIR Forum, 32(1), 5–17).
As the number of links grew and their pages began to receive thousands of hits a day, the team created ways to better organize the data. In order to aid in data retrieval, Yahoo! (www.yahoo.com) became a searchable directory. The search feature was a simple database search engine. Because Yahoo! entries were entered and categorized manually, Yahoo! was not really classified as a search engine. Instead, it was generally considered to be a searchable directory. Yahoo! has since automated some aspects of the gathering and classification process, blurring the distinction between engine and directory.
The Wanderer captured only URLs, which made it difficult to find things that were not explicitly described by their URL. Because URLs are rather cryptic to begin with, this did not help the average user. Searching Yahoo! or the Galaxy was much more effective because they contained additional descriptive information about the indexed sites.
The process begins when a user enters a query statement into the system through the interface provided.
There are basically three types of search engines: those that are powered by robots (called crawlers, ants, or spiders); those that are powered by human submissions; and those that are a hybrid of the two.
Crawler-based search engines are those that use automated software agents (called crawlers) that visit a Web site, read the information on the actual site, read the site's meta tags, and also follow the links that the site connects to, performing indexing on all linked Web sites as well. The crawler returns all that information to a central depository, where the data is indexed. The crawler periodically returns to the sites to check for any information that has changed. The frequency with which this happens is determined by the administrators of the search engine.
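A stripped-down sketch of one step of such a crawl, using only Python's standard library: fetch a page and extract its links for the crawl frontier. The seed URL is a placeholder, and real crawlers add politeness delays, deduplication, and robots.txt handling.

```python
# One step of a crawl: fetch a page and extract its outgoing links
# for later visits. The seed URL is a placeholder; real crawlers
# add politeness delays, deduplication, and robots.txt checks.
from html.parser import HTMLParser
from urllib.parse import urljoin
from urllib.request import urlopen

class LinkExtractor(HTMLParser):
    def __init__(self, base_url):
        super().__init__()
        self.base_url = base_url
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(urljoin(self.base_url, value))

seed = "https://example.com/"
html = urlopen(seed).read().decode("utf-8", errors="replace")
extractor = LinkExtractor(seed)
extractor.feed(html)
print(extractor.links)  # candidate pages for the crawl frontier
```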
Human-powered search engines rely on humans to submit information that is subsequently indexed and catalogued. Only information that is submitted is put into the index.
In both cases, when a user queries a search engine to locate information, they are actually searching through the index that the search engine has created; they are not actually searching the Web. These indices are giant databases of information that is collected, stored, and subsequently searched. This explains why sometimes a search on a commercial search engine, such as Yahoo! or Google, will return results that are, in fact, dead links. Since the search results are based on the index, if the index has not been updated since a Web page became invalid, the search engine treats the page as still an active link even though it no longer is. It will remain that way until the index is updated.
So why will the same search on different search engines produce different results? Part of the answer to that question is because not all indices are going to be exactly the same. It depends on what the spiders find or what the humans submitted. But more important, not every search engine uses the same algorithm to search through the indices. The algorithm is what the search engines use to determine the relevance of the information in the index to what the user is searching for.
One of the elements that a search engine algorithm scans for is the frequency and location of keywords on a Web page. Those with higher frequency are typically considered more relevant. But search engine technology is becoming sophisticated in its attempt to discourage what is known as keyword stuffing, or spamdexing.
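A toy version of such a frequency signal is sketched below; real engines damp raw counts (for example with TF-IDF weighting) precisely to blunt keyword stuffing. The scoring function and document are invented.

```python
# Toy relevance signal: keyword occurrences per word of document.
# Real engines damp raw counts (e.g. TF-IDF) to resist stuffing.
def frequency_score(query, text):
    words = text.lower().split()
    hits = sum(words.count(term) for term in query.lower().split())
    return hits / max(len(words), 1)

doc = "search engines rank pages and engines weigh keyword frequency"
print(frequency_score("engines", doc))  # 2 hits / 9 words = 0.22...
```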
Another common element that algorithms analyze is the way that pages link to other pages in the Web. By analyzing how pages link to each other, an engine can both determine what a page is about (if the keywords of the linked pages are similar to the keywords on the original page) and whether that page is considered "important" and deserving of a boost in ranking. Just as the technology is becoming increasingly sophisticated to ignore keyword stuffing, it is also becoming more savvy to Web masters who build artificial links into their sites in order to build an artificial ranking.
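One classic form of such link analysis is PageRank, mentioned earlier; a bare-bones power-iteration sketch over an invented three-page link graph follows (d is the usual damping factor, and dangling pages are ignored for simplicity).

```python
# Bare-bones PageRank by power iteration over an invented link
# graph. Dangling pages (no outlinks) are not handled, so every
# page in this toy graph links somewhere.
def pagerank(links, d=0.85, iterations=50):
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        new_rank = {}
        for p in pages:
            # rank flowing into p from every page q that links to it
            incoming = sum(rank[q] / len(links[q]) for q in pages if p in links[q])
            new_rank[p] = (1 - d) / len(pages) + d * incoming
        rank = new_rank
    return rank

links = {"a": {"b", "c"}, "b": {"c"}, "c": {"a"}}
print(pagerank(links))  # "c" accumulates the most link weight
```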
Modern web search engines are highly intricate software systems that employ technology that has evolved over the years. There are a number of sub-categories of search engine software that are separately applicable to specific 'browsing' needs. These include web search engines (e.g. Google), database or structured-data search engines (e.g. Dieselpoint), and mixed search engines or enterprise search. The more prevalent search engines, such as Google and Yahoo!, utilize hundreds of thousands of computers to process trillions of web pages in order to return fairly well-aimed results. Due to this high volume of queries and text processing, the software is required to run in a highly dispersed environment with a high degree of redundancy.
Another category of search engines is scientific search engines, which search the scientific literature. The best-known example is Google Scholar. Researchers are working on improving search engine technology by making engines understand the content of articles, such as extracting theoretical constructs or key research findings.
Market share
Google Search is by far the world's most used search engine, with a market share of 90%; the world's other most used search engines were Microsoft Bing at 4%, Yandex Search at 2%, and Yahoo! at 1%. Other search engines not listed have less than a 3% market share. In 2024, Google's dominance was ruled an illegal monopoly in a case brought by the US Department of Justice.
"version": 2,
"width": 650,
"height": 260,
"data": [
{
"name": "table",
"values": [
{
"x": "Google 90.6%",
"y": 92.01
},
{
"x": "Bing 3.25%",
"y": 3.25
},
{
"x": "Yahoo! 3.12%",
"y": 3.12
},
{
"x": "Baidu 1.17%",
"y": 1.17
},
{
"x": "Yandex 1.06%",
"y": 1.06
},
{
"x": "DuckDuckGo 0.68%",
"y": 0.68
}
]
}
],
"scales": [
{
"name": "x",
"type": "ordinal",
"range": "width",
"zero": false,
"domain": {
"data": "table",
"field": "x"
}
},
{
"name": "y",
"type": "linear",
"range": "height",
"nice": true,
"domain": {
"data": "table",
"field": "y"
}
}
],
"axes": [
{
"type": "x",
"scale": "x"
},
{
"type": "y",
"scale": "y"
}
],
"marks": [
{
"type": "rect",
"from": {
"data": "table"
},
"properties": {
"enter": {
"x": {
"scale": "x",
"field": "x"
},
"y": {
"scale": "y",
"field": "y"
},
"y2": {
"scale": "y",
"value": 0
},
"fill": {
"value": "steelblue"
},
"width": {
"scale": "x",
"band": "true",
"offset": -1
}
}
}
}
],
"padding": {
"top": 30,
"bottom": 30,
"left": 50,
"right": 50
}
}
Types of web search engines
| Type | Examples | Description |
| Conventional | Library catalog | Search by keyword, title, author, etc. |
| Text-based | Google, Bing, Yahoo! | Search by keywords. Limited search using queries in natural language. |
| Voice-based | Google, Bing, Yahoo! | Search by keywords. Limited search using queries in natural language. |
| Multimedia search | QBIC, WebSeek, SaFe | Search by visual appearance (shapes, colors, ...) |
| Q/A | Stack Exchange, NSIR | Search in (restricted) natural language |
| Clustering systems | Vivisimo, Clusty | |
| Research systems | Lemur, Nutch | |